Most research studying social determinants of health (SDoH) has focused on physician notes or structured elements of the electronic medical record (EMR). We hypothesize that clinical notes from social workers, whose role is to ameliorate adverse social and economic factors, might provide a richer source of data on SDoH. We sought to perform topic modeling to identify robust topics of discussion within a large cohort of social work notes. We retrieved a diverse, deidentified corpus of 0.95 million clinical social work notes from 181,644 patients at the University of California, San Francisco. We used word frequency analysis and Latent Dirichlet Allocation (LDA) topic modeling to characterize this corpus and identify potential topics of discussion. Word frequency analysis identified both medical and non-medical terms associated with specific ICD-10 chapters. The LDA topic modeling extracted 11 topics related to SDoH risk factors, including financial status, abuse history, social support, risk of death, and mental health. In addition, the topic modeling approach captured the variation between different types of social work notes and across patients with different types of diseases or conditions. We demonstrated that social work notes contain rich, unique, and otherwise unobtainable information on an individual's SDoH.
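Below is a minimal sketch of the kind of LDA pipeline described above, using scikit-learn on a few toy stand-in notes; the corpus, preprocessing, and hyperparameters are illustrative assumptions rather than the authors' exact setup (only the choice of 11 topics echoes the abstract).

```python
# Minimal LDA topic-modeling sketch (illustrative; not the authors' exact pipeline).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for deidentified social work notes.
notes = [
    "patient reports housing instability and financial stress",
    "discussed grief support and coping after recent loss",
    "safety planning around history of domestic abuse",
]

# Bag-of-words counts, dropping English stop words.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(notes)

# Fit LDA; n_components=11 is an assumption mirroring the 11 topics reported above.
lda = LatentDirichletAllocation(n_components=11, random_state=0)
lda.fit(counts)

# Inspect the top words per topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```

In a realistic setting the notes list would be replaced by the full corpus and the vocabulary would be filtered (e.g., by document frequency) before fitting.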
Zero-shot detection (ZSD) is a challenging task in which we aim to recognize and localize objects simultaneously, even when our model has not been trained with visual samples of a few target ("unseen") classes. Recently, methods employing generative models such as GANs have shown some of the best results: unseen-class samples are generated based on their semantics by a GAN trained on seen-class data, enabling vanilla object detectors to recognize unseen objects. However, the problem of semantic confusion remains, where the model is sometimes unable to distinguish between semantically similar classes. In this work, we propose to train a generative model with a triplet loss that acknowledges the degree of dissimilarity between classes and reflects it in the generated samples. Moreover, a cyclic-consistency loss is also enforced to ensure that generated visual samples of a class correspond closely to their own semantics. Extensive experiments on two benchmark ZSD datasets, MSCOCO and PASCAL-VOC, demonstrate significant gains over current ZSD methods, reducing semantic confusion and improving detection for the unseen classes.
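A hedged PyTorch sketch of the two losses described above is given below: a triplet loss whose margin grows with the semantic dissimilarity between classes, and a cyclic-consistency term that decodes generated features back to their class semantics. The generator, decoder, dimensions, and margin scaling are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, sem_dim, noise_dim = 512, 300, 100  # assumed sizes

# Conditional generator: class semantics + noise -> visual feature (assumed architecture).
generator = nn.Sequential(nn.Linear(sem_dim + noise_dim, 1024), nn.LeakyReLU(0.2),
                          nn.Linear(1024, feat_dim), nn.ReLU())
# Decoder used for the cyclic-consistency term: visual feature -> semantics.
decoder = nn.Sequential(nn.Linear(feat_dim, 512), nn.LeakyReLU(0.2),
                        nn.Linear(512, sem_dim))

def generate(semantics):
    noise = torch.randn(semantics.size(0), noise_dim)
    return generator(torch.cat([semantics, noise], dim=1))

def triplet_cycle_losses(anchor_sem, positive_sem, negative_sem, margin_scale=1.0):
    """anchor/positive share a class; negative is a semantically similar other class."""
    anchor = generate(anchor_sem)
    positive = generate(positive_sem)
    negative = generate(negative_sem)

    # Margin grows with the semantic dissimilarity of the two classes,
    # so more dissimilar classes are pushed further apart in feature space.
    margin = margin_scale * (1.0 - F.cosine_similarity(anchor_sem, negative_sem)).mean()
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=margin.item())

    # Cyclic consistency: generated features should decode back to their semantics.
    cycle = F.mse_loss(decoder(anchor), anchor_sem)
    return triplet, cycle
```

These two terms would be added to the usual adversarial objective when training the conditional generator.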
Deep learning has been the subject of growing interest in recent years. In particular, multimodal learning has shown great promise for solving a wide range of problems in domains such as language, vision, and audio. One promising direction for further improvement is learning rich, robust, low-dimensional representations of the high-dimensional world with the help of the large-scale datasets available on the internet. Because it avoids the cost of annotating large-scale datasets, self-supervised learning has become the de facto standard for this task in recent years. This paper summarizes some of the landmark research papers that are directly or indirectly responsible for building the foundation of multimodal self-supervised representation learning today. The paper reviews the development of representation learning over the last few years for each modality and how these approaches were later combined to build multimodal agents.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
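Since the trained checkpoints are publicly released, a small usage sketch with the Hugging Face transformers library is shown below; the 560M-parameter variant and the generation settings are chosen purely for illustration so the example stays lightweight.

```python
# Minimal generation sketch with a released BLOOM checkpoint (illustrative settings).
from transformers import AutoModelForCausalLM, AutoTokenizer

# The small 560M variant keeps the example runnable on modest hardware;
# "bigscience/bloom" is the full 176B model described above.
model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Translate to French: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```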
Session-based recommendation aims to predict a user's next action based on the ongoing session. Previous works model a session as a variable-length sequence of items and learn representations of individual items and of the aggregated session. Recent studies apply graph neural networks with attention mechanisms to capture complex item transitions and dependencies by modeling sessions as graph-structured data. However, they still face fundamental challenges in terms of data and learning methodology, such as sparse supervision signals and noisy interactions within sessions, leading to suboptimal performance. In this paper, we propose SR-GCL, a novel contrastive learning framework for session-based recommendation. As a key component of contrastive learning, we propose two global-context-enhanced data augmentation methods that preserve the semantics of the original session. Extensive experimental results on two real-world e-commerce datasets demonstrate the superiority of SR-GCL compared with other state-of-the-art methods.
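A minimal sketch of the contrastive component is shown below, assuming the standard NT-Xent formulation over two augmented views of each session embedding; the session encoder and the global-context augmentations themselves are placeholders, since they are not detailed in the abstract.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(view1, view2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views of each session.

    view1, view2: (batch, dim) session embeddings produced by an (assumed) encoder
    from two augmentations of the same sessions.
    """
    batch = view1.size(0)
    z = F.normalize(torch.cat([view1, view2], dim=0), dim=1)   # (2B, d)
    sim = torch.matmul(z, z.t()) / temperature                 # (2B, 2B) similarities
    sim.fill_diagonal_(float("-inf"))                          # exclude self-similarity

    # Positive pairs: embedding i matches embedding i + batch (same session, other view).
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)
```

In SR-GCL this kind of contrastive term would be combined with the usual next-item prediction loss.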
Time series data are ubiquitous in real-world applications. However, one of the most common problems is that time series data can contain missing values owing to the inherent nature of the data collection process. Imputing missing values in multivariate (correlated) time series data is therefore essential for improving prediction performance and making accurate data-driven decisions. Conventional imputation approaches simply delete missing values or fill them in with the mean or zero. Although recent works based on deep neural networks have shown remarkable results, they are still limited in capturing the complex generative process of multivariate time series. In this paper, we propose a novel imputation method for multivariate time series data, called STING (Self-attention based Time-series Imputation Networks using GAN). We leverage generative adversarial networks and bidirectional recurrent neural networks to learn latent representations of the time series. In addition, we introduce a novel attention mechanism to capture the weighted correlations of the whole sequence and avoid the potential bias introduced by irrelevant sequences. Experimental results on three real-world datasets show that STING outperforms existing state-of-the-art methods in terms of imputation accuracy as well as downstream tasks using the imputed values.
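As a rough illustration of the attention idea, the sketch below fills missing entries with an attention-weighted combination of observed time steps; it is a simplified stand-in with random projection weights, not STING's actual GAN-plus-bidirectional-RNN architecture.

```python
import torch
import torch.nn.functional as F

def attention_impute(x, mask, d_k=16, seed=0):
    """Toy single-head self-attention imputation over one multivariate series.

    x:    (T, D) observations with missing entries set to 0
    mask: (T, D) tensor, 1 where observed, 0 where missing
    Returns x with missing entries replaced by attention-weighted averages of
    observed time steps. Projection weights are random here (a stand-in for learning).
    """
    torch.manual_seed(seed)
    mask = mask.float()
    T, D = x.shape
    w_q, w_k = torch.randn(D, d_k), torch.randn(D, d_k)

    q, k = x @ w_q, x @ w_k                       # (T, d_k) query/key projections
    scores = (q @ k.t()) / d_k ** 0.5             # (T, T) pairwise relevance
    # Only attend to time steps that have at least one observed feature.
    observed_step = mask.sum(dim=1) > 0
    scores = scores.masked_fill(~observed_step.unsqueeze(0), float("-inf"))
    attn = F.softmax(scores, dim=1)               # (T, T) attention weights

    estimate = attn @ x                           # attention-weighted reconstruction
    return mask * x + (1 - mask) * estimate       # keep observed, fill missing
```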
Tabular data usually contains private and important information; thus, precautions must be taken before sharing it with others. Although several methods (e.g., differential privacy and k-anonymity) have been proposed to prevent information leakage, tabular data synthesis models have become popular in recent years because they can strike a good trade-off between data utility and privacy. However, recent research has shown that generative models for image data are vulnerable to membership inference attacks, which can determine whether a given record was used to train a victim synthesis model. In this paper, we investigate membership inference attacks in the context of tabular data synthesis. We conduct experiments on 4 state-of-the-art tabular data synthesis models under two attack scenarios (i.e., one black-box and one white-box attack), and find that membership inference attacks can seriously jeopardize these models. We next conduct experiments to evaluate how well two popular differentially private deep learning training algorithms, DP-SGD and DP-GAN, can protect the models from the attacks. Our key finding is that both algorithms can mitigate this threat by sacrificing generation quality. Code and data are available at: https://github.com/jayoungkim408/mia
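A minimal sketch of a distance-based black-box membership inference attack of the general kind evaluated above is shown below: the attacker sees only synthetic records and flags a candidate as a training member if it lies unusually close to some synthetic sample. The distance metric and threshold are illustrative assumptions, not the paper's exact attack.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def black_box_membership_attack(synthetic, candidates, threshold):
    """Distance-to-nearest-synthetic-record attack (illustrative, not the paper's exact method).

    synthetic:  (n, d) records sampled from the victim synthesis model
    candidates: (m, d) records whose training-set membership is being inferred
    threshold:  distance below which a candidate is labeled a training member
    Returns a boolean membership guess per candidate.
    """
    nn = NearestNeighbors(n_neighbors=1).fit(synthetic)
    dist, _ = nn.kneighbors(candidates)
    return dist.ravel() < threshold

# Toy usage with random data; in practice the threshold would be calibrated
# on records known to be non-members.
rng = np.random.default_rng(0)
synthetic = rng.normal(size=(1000, 8))
candidates = rng.normal(size=(10, 8))
print(black_box_membership_attack(synthetic, candidates, threshold=0.5))
```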